A Generative Model of Causal Cycles
Abstract
Causal graphical models (CGMs) have become popular in numerous domains of psychological research for representing people’s causal knowledge. Unfortunately, however, the CGMs typically used in cognitive models prohibit representations of causal cycles. Building on work in machine learning, we propose an extension of CGMs that allows cycles and apply that representation to one real-world reasoning task, namely, classification. Our model’s predictions were assessed in experiments that tested both probabilistic and deterministic causal relations. The results were qualitatively consistent with the predictions of our model and inconsistent with those of an alternative model.

We naturally reason about causally related events that occur in cycles. In economics, we expect that an increase in corporate hiring may increase consumers’ income and thus their demand for products, leading to a further increase in hiring. In meteorology, we expect that melting tundra due to global warming may release the greenhouse gas methane, leading to yet further warming. In psychology, we expect that clinicians will affect (hopefully help) their clients, but we also recognize that clients often affect the clinicians.

Many psychologists investigate causal reasoning using a formalism known as Bayesian networks or causal graphical models (hereafter, CGMs). CGMs are one hypothesis for how people reason with causal knowledge. There are claims that causal learning amounts to acquiring the structure and/or parameters of a CGM (Cheng, 1997; Gopnik et al., 2004; Griffiths & Tenenbaum, 2005, 2009; Lu et al., 2008; Sobel et al., 2004; Waldmann et al., 1995). And many models of causal reasoning assume that people honor the inferential rules that accompany CGMs (Holyoak et al., 2010; Lee & Holyoak, 2008; Rehder & Burnett, 2005; Rehder, 2003, 2009; Rehder & Kim, 2010; Shafto et al., 2008; Sloman & Lagnado, 2005; Waldmann & Hagmeyer, 2005).
Unfortunately, because standard CGMs prohibit the presence of causal cycles, these models are unable to represent any of the cyclic events mentioned above. In this article, we take the initial steps toward extending CGMs using an ‘unfolding’ trick from machine learning (Spirtes, 1993). We discuss the implications of this approach for one class of reasoning problem, namely, classification. There is a rich literature on how causal knowledge among the features of a category changes how people classify. We first review evidence for causal cycles among category features and one proposal for how they affect classification. We then report two experiments that test that account. Finally, we present our own model for extending CGMs to represent cycles in people’s mental representations of categories.

Unfolding Cycles

One technique used to elicit people’s beliefs about the causal structure of categories is the theory drawing task. Subjects are presented with category features and asked to draw directed edges indicating how those features are causally related. These drawings show that causal cycles are common. For example, Kim and Ahn (2002) found that 65% of subjects’ representations of mental disorders such as depression included cycles. Sloman et al. found numerous cycles in subjects’ theories of everyday biological kinds and artifacts.

In a first attempt to account for how cycles affect categorization, Kim et al. (2009) made two assumptions. The first was that causal knowledge affects classification in a manner specified by the dependency model (Sloman et al., 1998). On this account, features vary in their conceptual centrality, such that more central features provide more evidence for category membership. A feature’s centrality is a function of its number of (direct and indirect) dependents (i.e., effects).
Quantitatively, feature i’s centrality c_i can be computed from the iterative equation

    c_{i,t+1} = Σ_j d_ij c_{j,t}    (1)

where c_{i,t} is i’s weight at iteration t and d_ij is the strength of the causal link between i and its dependent j. For example, if a category has three features X, Y, and Z, and X causes Y which causes Z, then when c_{Z,1} is initialized to 1 and each causal link has a strength of 2, after two iterations the centralities of X, Y, and Z are 4, 2, and 1. That is, feature X is more important to category membership than Y, which is more important than Z. Qualitatively, the dependency model predicts this because X has two dependents (Y and Z), Y has one (Z), and Z has none.

Kim et al.’s second assumption was that people reason with a simplified representation of cycles. Two reasons were provided for this assumption. First, because variables rarely cause each other constantly and simultaneously, it is likely that people assume that they influence each other in discrete time steps. Second, because it is implausible that people represent time steps extending into infinity, only a limited number of steps are likely to be considered. For example, consider the category in Fig. 1A, in which feature C causes feature E and features X and Y are related in a causal cycle. Fig. 1B shows the cycle “unfolded” by one time step. The assumption is that in generation 1, X and Y mutually influenced one another, resulting in their states in generation 2 (X2 and Y2). Kim et al. proposed that feature importance would correspond to the predictions of the dependency model applied to the unfolded representation in Fig. 1B.
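To make Equation 1 concrete, the iteration can be sketched in a few lines of Python. This is an illustrative sketch, not the published specification: the function name, the dictionary encoding of link strengths, and the convention that a feature with no dependents retains its current weight (needed to reproduce the 4, 2, 1 result in the X → Y → Z example) are our own assumptions, as are the edges X1 → Y2 and Y1 → X2 used to approximate the unfolded cycle of Fig. 1B.

```python
def centralities(strengths, init, iterations):
    """Iterate Equation 1: c[i] <- sum over dependents j of d_ij * c[j].

    strengths[i] maps each dependent j of feature i to the link
    strength d_ij; init gives each feature's weight at iteration 1.
    Features with no dependents keep their weight (our assumption).
    """
    c = dict(init)
    for _ in range(iterations):
        c = {i: sum(d * c[j] for j, d in strengths[i].items())
                if strengths[i] else c[i]
             for i in c}
    return c

# Chain X -> Y -> Z, every link strength 2, all weights initialized to 1.
chain = {"X": {"Y": 2}, "Y": {"Z": 2}, "Z": {}}
print(centralities(chain, {"X": 1, "Y": 1, "Z": 1}, 2))
# -> {'X': 4, 'Y': 2, 'Z': 1}: X outranks Y, which outranks Z.

# The same computation on an unfolded cycle in the spirit of Fig. 1B
# (hypothetical edges X1 -> Y2 and Y1 -> X2, alongside C -> E).
unfolded = {"C": {"E": 2}, "X1": {"Y2": 2}, "Y1": {"X2": 2},
            "E": {}, "X2": {}, "Y2": {}}
print(centralities(unfolded, {f: 1 for f in unfolded}, 2))
# Generation-1 features (and C) come out more central than their effects.
```

Under these assumptions, the unfolded run assigns X1, Y1, and C equal centrality, each higher than that of X2, Y2, and E, matching the qualitative prediction that causes outrank their dependents.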
Publication date: 2011